Face Generation

In this project, you'll use generative adversarial networks to generate new images of faces.

Get the Data

You'll be using two datasets in this project:

  • MNIST
  • CelebA

Since the CelebA dataset is complex and this is your first GAN project, we want you to test your neural network on MNIST before CelebA. Running the GAN on MNIST will let you see how well your model trains sooner.

If you're using FloydHub, set data_dir to "/input" and use the FloydHub data ID "R5KrjnANiKVhLWAkpXhNBe".

In [2]:
import subprocess
print(subprocess.run(["pip install tqdm"], stdout=subprocess.PIPE, shell=True).stdout.decode())
#print(subprocess.run(["pip uninstall -y tqdm"], stdout=subprocess.PIPE, shell=True).stdout.decode())
Collecting tqdm
  Downloading tqdm-4.11.2-py2.py3-none-any.whl (46kB)
Installing collected packages: tqdm
Successfully installed tqdm-4.11.2

In [3]:
data_dir = '/input'

# FloydHub - Use with data ID "R5KrjnANiKVhLWAkpXhNBe"
#data_dir = '/input'


"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
import helper

helper.download_extract('mnist', data_dir)
helper.download_extract('celeba', data_dir)
Found mnist Data
Found celeba Data

Explore the Data

MNIST

As you're aware, the MNIST dataset contains images of handwritten digits. You can change the number of examples displayed by changing show_n_images.

In [5]:
import helper
data_dir = '/input'

show_n_images = 25

"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
%matplotlib inline
import os
from glob import glob
from matplotlib import pyplot

mnist_images = helper.get_batch(glob(os.path.join(data_dir, 'mnist/*.jpg'))[:show_n_images], 28, 28, 'L')
pyplot.imshow(helper.images_square_grid(mnist_images, 'L'), cmap='gray')
Out[5]:
<matplotlib.image.AxesImage at 0x7fa39b30b2b0>

CelebA

The CelebFaces Attributes Dataset (CelebA) contains over 200,000 celebrity images with annotations. Since you're going to be generating faces, you won't need the annotations. You can change the number of examples displayed by changing show_n_images.

In [6]:
show_n_images = 25

"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
mnist_images = helper.get_batch(glob(os.path.join(data_dir, 'img_align_celeba/*.jpg'))[:show_n_images], 28, 28, 'RGB')
pyplot.imshow(helper.images_square_grid(mnist_images, 'RGB'))
Out[6]:
<matplotlib.image.AxesImage at 0x7fa39b20bb70>

Preprocess the Data

Since the project's main focus is on building the GAN, we'll preprocess the data for you. Both the MNIST and CelebA images will be 28x28, with pixel values scaled to the range -0.5 to 0.5. The CelebA images will be cropped to remove parts of the image that don't include a face, then resized down to 28x28.

The MNIST images are black-and-white images with a single color channel, while the CelebA images have three color channels (RGB).
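The actual preprocessing lives in helper.py, but a minimal sketch of the crop-and-scale idea, using NumPy and hypothetical function names (center_crop and scale are illustrations, not the real helper.py functions; the final resize to 28x28, which needs an image library, is omitted here), might look like:

```python
import numpy as np

def center_crop(image, crop_h, crop_w):
    """Center-crop an H x W x C image array to crop_h x crop_w."""
    h, w = image.shape[:2]
    top = (h - crop_h) // 2
    left = (w - crop_w) // 2
    return image[top:top + crop_h, left:left + crop_w]

def scale(image):
    """Scale uint8 pixel values from [0, 255] to [-0.5, 0.5]."""
    return image / 255.0 - 0.5

# Example on a fake 218x178 RGB image (the CelebA aligned image size)
fake = np.random.randint(0, 256, size=(218, 178, 3)).astype(np.uint8)
cropped = center_crop(fake, 108, 108)   # keep the central face region
scaled = scale(cropped)                 # values now in [-0.5, 0.5]
print(cropped.shape)                    # (108, 108, 3)
```

After the crop, the image would still need to be resized to 28x28 before being fed to the network.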

Build the Neural Network

You'll build the components necessary for a GAN by implementing the following functions below:

  • model_inputs
  • discriminator
  • generator
  • model_loss
  • model_opt
  • train

Check the Version of TensorFlow and Access to GPU

This will check that you have the correct version of TensorFlow and access to a GPU.

In [7]:
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
from distutils.version import LooseVersion
import warnings
import tensorflow as tf

# Check TensorFlow Version
assert LooseVersion(tf.__version__) >= LooseVersion('1.0'), 'Please use TensorFlow version 1.0 or newer.  You are using {}'.format(tf.__version__)
print('TensorFlow Version: {}'.format(tf.__version__))

# Check for a GPU
if not tf.test.gpu_device_name():
    warnings.warn('No GPU found. Please use a GPU to train your neural network.')
else:
    print('Default GPU Device: {}'.format(tf.test.gpu_device_name()))
TensorFlow Version: 1.0.0
Default GPU Device: /gpu:0

Input

Implement the model_inputs function to create TF Placeholders for the Neural Network. It should create the following placeholders:

  • Real input images placeholder with rank 4 using image_width, image_height, and image_channels.
  • Z input placeholder with rank 2 using z_dim.
  • Learning rate placeholder with rank 0.

Return the placeholders in the following tuple: (tensor of real input images, tensor of z data, learning rate).

In [8]:
import problem_unittests as tests

def model_inputs(image_width, image_height, image_channels, z_dim):
    """
    Create the model inputs
    :param image_width: The input image width
    :param image_height: The input image height
    :param image_channels: The number of image channels
    :param z_dim: The dimension of Z
    :return: Tuple of (tensor of real input images, tensor of z data, learning rate)
    """
    # TODO: Implement Function
    input_real = tf.placeholder(tf.float32,
                                shape=(None, image_width, image_height, image_channels),
                                name='input_real')

    input_z = tf.placeholder(tf.float32,
                             shape=(None, z_dim),
                             name='input_z')

    learning_rate = tf.placeholder(tf.float32, name='learning_rate')

    return input_real, input_z, learning_rate


"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_model_inputs(model_inputs)
Tests Passed

Discriminator

Implement discriminator to create a discriminator neural network that discriminates on images. This function should be able to reuse the variables in the neural network. Use tf.variable_scope with a scope name of "discriminator" to allow the variables to be reused. The function should return a tuple of (tensor output of the discriminator, tensor logits of the discriminator).

In [41]:
def discriminator(images, reuse=False):
    """
    Create the discriminator network
    :param image: Tensor of input image(s)
    :param reuse: Boolean if the weights should be reused
    :return: Tuple of (tensor output of the discriminator, tensor logits of the discriminator)
    """
    # TODO: Implement Function
    def leaky_relu(x, alpha):
        return tf.maximum(alpha * x, x)

    with tf.variable_scope('discriminator', reuse=reuse):
        # Input is 28x28x1 (MNIST) or 28x28x3 (CelebA) - discriminator layer 1
        dis_layer1 = tf.layers.conv2d(images, 128, 5, strides=2, padding='same',
                                      kernel_initializer=tf.contrib.layers.xavier_initializer(uniform=False))
        dis_layer1 = tf.layers.batch_normalization(dis_layer1, training=True)
        relu1 = leaky_relu(dis_layer1, 0.1)

        # Input is 14x14x128 - discriminator layer 2
        dis_layer2 = tf.layers.conv2d(relu1, 256, 5, strides=2, padding='same',
                                      kernel_initializer=tf.contrib.layers.xavier_initializer(uniform=False))
        dis_layer2 = tf.layers.batch_normalization(dis_layer2, training=True)
        relu2 = leaky_relu(dis_layer2, 0.1)

        # Flatten the 7x7x256 tensor and apply dropout
        flat = tf.contrib.layers.flatten(relu2)
        flat = tf.nn.dropout(flat, keep_prob=0.5)

        # Convert the flat tensor to a single logit
        logits = tf.layers.dense(flat, 1)

        # Produce the output with a sigmoid activation
        out = tf.sigmoid(logits)

        return out, logits


"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_discriminator(discriminator, tf)
Tests Passed

Generator

Implement generator to generate an image using z. This function should be able to reuse the variables in the neural network. Use tf.variable_scope with a scope name of "generator" to allow the variables to be reused. The function should return the generated 28 x 28 x out_channel_dim images.

In [42]:
def generator(z, out_channel_dim, is_train=True):
    """
    Create the generator network
    :param z: Input z
    :param out_channel_dim: The number of channels in the output image
    :param is_train: Boolean if generator is being used for training
    :return: The tensor output of the generator
    """
    # TODO: Implement Function
    def leaky_relu(x, alpha):
        return tf.maximum(alpha * x, x)

    with tf.variable_scope('generator', reuse=not is_train):
        # First fully connected layer, reshaped into a 14x14x256 tensor
        gen_layer1 = tf.layers.dense(z, 14 * 14 * 256)
        gen_layer1 = tf.reshape(gen_layer1, (-1, 14, 14, 256))
        # Batch normalization followed by leaky ReLU activation
        gen_layer1 = tf.layers.batch_normalization(gen_layer1, training=is_train)
        gen_layer1 = leaky_relu(gen_layer1, 0.2)
        # Now a normalized, activated 14x14x256 tensor

        # Second layer - upsample width and height, reduce depth from 256 to 128
        gen_layer2 = tf.layers.conv2d_transpose(gen_layer1, 128, 5, strides=2, padding='same')
        gen_layer2 = tf.layers.batch_normalization(gen_layer2, training=is_train)
        gen_layer2 = leaky_relu(gen_layer2, 0.2)
        # 28x28x128

        # Third layer - keep width and height, reduce depth from 128 to 64
        gen_layer3 = tf.layers.conv2d_transpose(gen_layer2, 64, 5, strides=1, padding='same')
        gen_layer3 = tf.layers.batch_normalization(gen_layer3, training=is_train)
        gen_layer3 = leaky_relu(gen_layer3, 0.2)
        # 28x28x64

        # Output layer - same width and height, out_channel_dim channels
        logits = tf.layers.conv2d(gen_layer3, out_channel_dim, 5, strides=1, padding='same')
        out = tf.tanh(logits)

        return out


"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_generator(generator, tf)
Tests Passed

Loss

Implement model_loss to build the GAN for training and calculate the loss. The function should return a tuple of (discriminator loss, generator loss). Use the following functions you implemented:

  • discriminator(images, reuse=False)
  • generator(z, out_channel_dim, is_train=True)
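For reference, the sigmoid cross-entropy terms computed in the cell below correspond to the standard GAN objective with the non-saturating generator loss, where $D$ is the discriminator and $G$ the generator:

$$\mathcal{L}_D = -\mathbb{E}_{x \sim p_{data}}\left[\log D(x)\right] - \mathbb{E}_{z}\left[\log\left(1 - D(G(z))\right)\right]$$

$$\mathcal{L}_G = -\mathbb{E}_{z}\left[\log D(G(z))\right]$$

The first term of $\mathcal{L}_D$ is the real-image loss (labels of ones), the second is the fake-image loss (labels of zeros), and the generator is trained to make the discriminator label its samples as ones.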
In [43]:
def model_loss(input_real, input_z, out_channel_dim):
    """
    Get the loss for the discriminator and generator
    :param input_real: Images from the real dataset
    :param input_z: Z input
    :param out_channel_dim: The number of channels in the output image
    :return: A tuple of (discriminator loss, generator loss)
    """
    # TODO: Implement Function
    
    #Generator model
    g_model = generator(input_z, out_channel_dim)
    #Discriminator model for real data
    d_model_real, d_logits_real = discriminator(input_real)
    #Discriminator model for fake data
    d_model_fake, d_logits_fake = discriminator(g_model, reuse = True)
    
    d_loss_real = tf.reduce_mean(
                      tf.nn.sigmoid_cross_entropy_with_logits(logits=d_logits_real, labels=tf.ones_like(d_model_real)))
    
    d_loss_fake = tf.reduce_mean(
                      tf.nn.sigmoid_cross_entropy_with_logits(logits=d_logits_fake, labels=tf.zeros_like(d_model_fake)))
    
    g_loss = tf.reduce_mean(
                      tf.nn.sigmoid_cross_entropy_with_logits(logits=d_logits_fake, labels=tf.ones_like(d_model_fake)))
    
    d_loss = d_loss_real + d_loss_fake
    
    
    return d_loss, g_loss


"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_model_loss(model_loss)
Tests Passed

Optimization

Implement model_opt to create the optimization operations for the GAN. Use tf.trainable_variables to get all the trainable variables, then filter them by the discriminator and generator scope names. The function should return a tuple of (discriminator training operation, generator training operation).

In [44]:
def model_opt(d_loss, g_loss, learning_rate, beta1):
    """
    Get optimization operations
    :param d_loss: Discriminator loss Tensor
    :param g_loss: Generator loss Tensor
    :param learning_rate: Learning Rate Placeholder
    :param beta1: The exponential decay rate for the 1st moment in the optimizer
    :return: A tuple of (discriminator training operation, generator training operation)
    """
    # TODO: Implement Function
  
    #get weights and bias to update - variables
    trainable_variables = tf.trainable_variables()
    d_vars = [var for var in trainable_variables if var.name.startswith('discriminator')]
    g_vars = [var for var in trainable_variables if var.name.startswith('generator')]
    
    #optimize
    with tf.control_dependencies(tf.get_collection(tf.GraphKeys.UPDATE_OPS)):
        d_train_opt = tf.train.AdamOptimizer(learning_rate, beta1 = beta1).minimize(d_loss, var_list=d_vars)
        g_train_opt = tf.train.AdamOptimizer(learning_rate, beta1 = beta1).minimize(g_loss, var_list=g_vars)
    
    return d_train_opt, g_train_opt


"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_model_opt(model_opt, tf)
Tests Passed

Neural Network Training

Show Output

Use this function to show the current output of the generator during training. It will help you determine how well the GAN is training.

In [45]:
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
import numpy as np

def show_generator_output(sess, n_images, input_z, out_channel_dim, image_mode):
    """
    Show example output for the generator
    :param sess: TensorFlow session
    :param n_images: Number of Images to display
    :param input_z: Input Z Tensor
    :param out_channel_dim: The number of channels in the output image
    :param image_mode: The mode to use for images ("RGB" or "L")
    """
    cmap = None if image_mode == 'RGB' else 'gray'
    z_dim = input_z.get_shape().as_list()[-1]
    example_z = np.random.uniform(-1, 1, size=[n_images, z_dim])

    samples = sess.run(
        generator(input_z, out_channel_dim, False),
        feed_dict={input_z: example_z})

    images_grid = helper.images_square_grid(samples, image_mode)
    pyplot.imshow(images_grid, cmap=cmap)
    pyplot.show()

Train

Implement train to build and train the GANs. Use the following functions you implemented:

  • model_inputs(image_width, image_height, image_channels, z_dim)
  • model_loss(input_real, input_z, out_channel_dim)
  • model_opt(d_loss, g_loss, learning_rate, beta1)

Use show_generator_output to display the generator's output while you train. Running show_generator_output for every batch will drastically increase training time and the size of the notebook. It's recommended to print the generator output every 100 batches.

In [38]:
import os

mnist_files = os.listdir(r'C:\PERSONAL\UDACITY\NDegreeDL\semana 5\Project\face_generation\data\mnist')  # your local data directory
len_mnist = len(mnist_files)
print(len_mnist)
60000
In [30]:
import os

celeba_files = os.listdir(r'C:\PERSONAL\UDACITY\NDegreeDL\semana 5\Project\face_generation\data\img_align_celeba')  # your local data directory
len_faces = len(celeba_files)
print(len_faces)
202599
In [14]:
from time import gmtime, strftime
In [51]:
def train(epoch_count, batch_size, z_dim, learning_rate, beta1, get_batches, data_shape, data_image_mode):
    """
    Train the GAN
    :param epoch_count: Number of epochs
    :param batch_size: Batch Size
    :param z_dim: Z dimension
    :param learning_rate: Learning Rate
    :param beta1: The exponential decay rate for the 1st moment in the optimizer
    :param get_batches: Function to get batches
    :param data_shape: Shape of the data
    :param data_image_mode: The image mode to use for images ("RGB" or "L")
    """
    # TODO: Build Model
    steps = 0
    _, image_width, image_height, image_channels = data_shape
    input_real, input_z, lr = model_inputs(image_width, image_height, image_channels, z_dim)
    d_loss, g_loss = model_loss(input_real, input_z, image_channels)
    d_opt, g_opt = model_opt(d_loss, g_loss, lr, beta1)

    with tf.Session() as sess:
        sess.run(tf.global_variables_initializer())
        for epoch_i in range(epoch_count):
            for batch_images in get_batches(batch_size):
                # TODO: Train Model
                steps += 1
                # Rescale batches from [-0.5, 0.5] to [-1, 1] to match the generator's tanh output
                batch_images = batch_images * 2

                # Sample random noise for G
                batch_z = np.random.uniform(-1, 1, size=(batch_size, z_dim))

                # Run the optimizers
                _ = sess.run(d_opt, feed_dict={input_real: batch_images, input_z: batch_z, lr: learning_rate})
                _ = sess.run(g_opt, feed_dict={input_real: batch_images, input_z: batch_z, lr: learning_rate})

                if steps % 100 == 0:
                    # Every 100 batches, get the losses and print them out
                    train_loss_d = d_loss.eval({input_z: batch_z, input_real: batch_images})
                    train_loss_g = g_loss.eval({input_z: batch_z})

                    print("Epoch {}/{}...".format(epoch_i + 1, epoch_count),
                          "Discriminator Loss: {:.4f}...".format(train_loss_d),
                          "Generator Loss: {:.4f}".format(train_loss_g), "    ", "batch No.: ",
                          steps)

                if steps % 200 == 0:
                    # Show the generator's current output
                    show_generator_output(sess, 36, input_z, image_channels, data_image_mode)

MNIST

Test your GAN architecture on MNIST. After two epochs, the GAN should be able to generate images that look like handwritten digits. Make sure the loss of the generator is lower than the loss of the discriminator or close to 0.

In [52]:
batch_size = 32
z_dim = 100
learning_rate = 0.00005
beta1 = 0.4


"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
epochs = 2


mnist_dataset = helper.Dataset('mnist', glob(os.path.join(data_dir, 'mnist/*.jpg')))
with tf.Graph().as_default():
    train(epochs, batch_size, z_dim, learning_rate, beta1, mnist_dataset.get_batches,
          mnist_dataset.shape, mnist_dataset.image_mode)
    
Epoch 1/2... Discriminator Loss: 1.0895... Generator Loss: 0.8852      batch No.:  100
Epoch 1/2... Discriminator Loss: 1.2890... Generator Loss: 0.7269      batch No.:  200
Epoch 1/2... Discriminator Loss: 1.2181... Generator Loss: 1.3584      batch No.:  300
Epoch 1/2... Discriminator Loss: 1.2001... Generator Loss: 0.8601      batch No.:  400
Epoch 1/2... Discriminator Loss: 1.0376... Generator Loss: 0.8574      batch No.:  500
Epoch 1/2... Discriminator Loss: 0.9637... Generator Loss: 1.3054      batch No.:  600
Epoch 1/2... Discriminator Loss: 0.8709... Generator Loss: 1.7450      batch No.:  700
Epoch 1/2... Discriminator Loss: 0.9229... Generator Loss: 1.6244      batch No.:  800
Epoch 1/2... Discriminator Loss: 0.9058... Generator Loss: 1.0471      batch No.:  900
Epoch 1/2... Discriminator Loss: 0.9687... Generator Loss: 1.1773      batch No.:  1000
Epoch 1/2... Discriminator Loss: 1.1049... Generator Loss: 0.9726      batch No.:  1100
Epoch 1/2... Discriminator Loss: 1.0295... Generator Loss: 1.8963      batch No.:  1200
Epoch 1/2... Discriminator Loss: 1.0687... Generator Loss: 1.4903      batch No.:  1300
Epoch 1/2... Discriminator Loss: 1.2378... Generator Loss: 0.7016      batch No.:  1400
Epoch 1/2... Discriminator Loss: 1.1758... Generator Loss: 0.7910      batch No.:  1500
Epoch 1/2... Discriminator Loss: 1.0103... Generator Loss: 0.9913      batch No.:  1600
Epoch 1/2... Discriminator Loss: 1.4111... Generator Loss: 0.9155      batch No.:  1700
Epoch 1/2... Discriminator Loss: 1.2347... Generator Loss: 0.6082      batch No.:  1800
Epoch 2/2... Discriminator Loss: 0.9394... Generator Loss: 1.5381      batch No.:  1900
Epoch 2/2... Discriminator Loss: 0.9186... Generator Loss: 1.1911      batch No.:  2000
Epoch 2/2... Discriminator Loss: 0.8507... Generator Loss: 0.9873      batch No.:  2100
Epoch 2/2... Discriminator Loss: 0.9817... Generator Loss: 1.0981      batch No.:  2200
Epoch 2/2... Discriminator Loss: 0.8312... Generator Loss: 1.8367      batch No.:  2300
Epoch 2/2... Discriminator Loss: 1.0447... Generator Loss: 0.5956      batch No.:  2400
Epoch 2/2... Discriminator Loss: 0.9924... Generator Loss: 1.1213      batch No.:  2500
Epoch 2/2... Discriminator Loss: 1.1032... Generator Loss: 1.0961      batch No.:  2600
Epoch 2/2... Discriminator Loss: 0.8130... Generator Loss: 0.8327      batch No.:  2700
Epoch 2/2... Discriminator Loss: 0.7795... Generator Loss: 1.3556      batch No.:  2800
Epoch 2/2... Discriminator Loss: 0.9558... Generator Loss: 0.8874      batch No.:  2900
Epoch 2/2... Discriminator Loss: 0.9976... Generator Loss: 1.8697      batch No.:  3000
Epoch 2/2... Discriminator Loss: 0.8448... Generator Loss: 1.1504      batch No.:  3100
Epoch 2/2... Discriminator Loss: 0.7711... Generator Loss: 1.0586      batch No.:  3200
Epoch 2/2... Discriminator Loss: 1.1880... Generator Loss: 0.9460      batch No.:  3300
Epoch 2/2... Discriminator Loss: 0.8381... Generator Loss: 1.4044      batch No.:  3400
Epoch 2/2... Discriminator Loss: 1.0190... Generator Loss: 1.0881      batch No.:  3500
Epoch 2/2... Discriminator Loss: 1.1358... Generator Loss: 1.6532      batch No.:  3600
Epoch 2/2... Discriminator Loss: 1.0070... Generator Loss: 1.5295      batch No.:  3700

CelebA

Run your GAN on CelebA. One epoch takes around 20 minutes on an average GPU. You can run the whole epoch or stop once it starts to generate realistic faces.

In [53]:
batch_size = 32
z_dim = 100
learning_rate = 0.0001
beta1 = 0.5


"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
epochs = 1

celeba_dataset = helper.Dataset('celeba', glob(os.path.join(data_dir, 'img_align_celeba/*.jpg')))
with tf.Graph().as_default():
    train(epochs, batch_size, z_dim, learning_rate, beta1, celeba_dataset.get_batches,
          celeba_dataset.shape, celeba_dataset.image_mode)
Epoch 1/1... Discriminator Loss: 0.8900... Generator Loss: 1.2341      batch No.:  100
Epoch 1/1... Discriminator Loss: 1.0972... Generator Loss: 1.2360      batch No.:  200
Epoch 1/1... Discriminator Loss: 1.1802... Generator Loss: 1.1574      batch No.:  300
Epoch 1/1... Discriminator Loss: 1.6445... Generator Loss: 0.6769      batch No.:  400
Epoch 1/1... Discriminator Loss: 1.1514... Generator Loss: 0.9667      batch No.:  500
Epoch 1/1... Discriminator Loss: 1.5101... Generator Loss: 0.7360      batch No.:  600
Epoch 1/1... Discriminator Loss: 1.3548... Generator Loss: 1.1755      batch No.:  700
Epoch 1/1... Discriminator Loss: 1.5784... Generator Loss: 0.8216      batch No.:  800
Epoch 1/1... Discriminator Loss: 1.3495... Generator Loss: 0.9777      batch No.:  900
Epoch 1/1... Discriminator Loss: 1.5051... Generator Loss: 0.7915      batch No.:  1000
Epoch 1/1... Discriminator Loss: 1.6732... Generator Loss: 0.8466      batch No.:  1100
Epoch 1/1... Discriminator Loss: 1.3149... Generator Loss: 0.9288      batch No.:  1200
Epoch 1/1... Discriminator Loss: 1.2724... Generator Loss: 1.0025      batch No.:  1300
Epoch 1/1... Discriminator Loss: 1.6205... Generator Loss: 0.7461      batch No.:  1400
Epoch 1/1... Discriminator Loss: 1.2791... Generator Loss: 0.8098      batch No.:  1500
Epoch 1/1... Discriminator Loss: 1.3852... Generator Loss: 0.6253      batch No.:  1600
Epoch 1/1... Discriminator Loss: 1.2474... Generator Loss: 1.0127      batch No.:  1700
Epoch 1/1... Discriminator Loss: 1.2782... Generator Loss: 0.8534      batch No.:  1800
Epoch 1/1... Discriminator Loss: 1.4327... Generator Loss: 0.8124      batch No.:  1900
Epoch 1/1... Discriminator Loss: 1.5271... Generator Loss: 0.6836      batch No.:  2000
Epoch 1/1... Discriminator Loss: 1.3528... Generator Loss: 0.7435      batch No.:  2100
Epoch 1/1... Discriminator Loss: 1.3981... Generator Loss: 0.9080      batch No.:  2200
Epoch 1/1... Discriminator Loss: 1.1265... Generator Loss: 0.8401      batch No.:  2300
Epoch 1/1... Discriminator Loss: 1.4221... Generator Loss: 0.6560      batch No.:  2400
Epoch 1/1... Discriminator Loss: 0.8243... Generator Loss: 1.5872      batch No.:  2500
Epoch 1/1... Discriminator Loss: 1.1270... Generator Loss: 0.6557      batch No.:  2600
Epoch 1/1... Discriminator Loss: 1.3014... Generator Loss: 0.5850      batch No.:  2700
Epoch 1/1... Discriminator Loss: 1.4368... Generator Loss: 0.4079      batch No.:  2800
Epoch 1/1... Discriminator Loss: 1.1515... Generator Loss: 0.6414      batch No.:  2900
Epoch 1/1... Discriminator Loss: 0.9366... Generator Loss: 0.8286      batch No.:  3000
Epoch 1/1... Discriminator Loss: 0.6352... Generator Loss: 3.7697      batch No.:  3100
Epoch 1/1... Discriminator Loss: 0.8765... Generator Loss: 2.2478      batch No.:  3200
Epoch 1/1... Discriminator Loss: 1.0288... Generator Loss: 1.1006      batch No.:  3300
Epoch 1/1... Discriminator Loss: 0.6511... Generator Loss: 2.4333      batch No.:  3400
Epoch 1/1... Discriminator Loss: 1.0868... Generator Loss: 0.7550      batch No.:  3500
Epoch 1/1... Discriminator Loss: 0.3187... Generator Loss: 1.9765      batch No.:  3600
Epoch 1/1... Discriminator Loss: 1.3871... Generator Loss: 0.4310      batch No.:  3700
Epoch 1/1... Discriminator Loss: 0.4360... Generator Loss: 2.8935      batch No.:  3800
Epoch 1/1... Discriminator Loss: 1.8266... Generator Loss: 1.8706      batch No.:  3900
Epoch 1/1... Discriminator Loss: 1.4481... Generator Loss: 0.8642      batch No.:  4000
Epoch 1/1... Discriminator Loss: 0.9209... Generator Loss: 3.9825      batch No.:  4100
Epoch 1/1... Discriminator Loss: 0.8931... Generator Loss: 2.1234      batch No.:  4200
Epoch 1/1... Discriminator Loss: 1.1060... Generator Loss: 0.9838      batch No.:  4300
Epoch 1/1... Discriminator Loss: 1.1079... Generator Loss: 0.7348      batch No.:  4400
Epoch 1/1... Discriminator Loss: 0.7372... Generator Loss: 1.9022      batch No.:  4500
Epoch 1/1... Discriminator Loss: 1.1391... Generator Loss: 0.8596      batch No.:  4600
Epoch 1/1... Discriminator Loss: 0.4626... Generator Loss: 2.8546      batch No.:  4700
Epoch 1/1... Discriminator Loss: 0.7718... Generator Loss: 1.1721      batch No.:  4800
Epoch 1/1... Discriminator Loss: 0.7707... Generator Loss: 1.5562      batch No.:  4900
Epoch 1/1... Discriminator Loss: 0.4888... Generator Loss: 1.7497      batch No.:  5000
Epoch 1/1... Discriminator Loss: 1.4023... Generator Loss: 0.7558      batch No.:  5100
Epoch 1/1... Discriminator Loss: 1.5506... Generator Loss: 1.0713      batch No.:  5200
Epoch 1/1... Discriminator Loss: 0.4424... Generator Loss: 1.9805      batch No.:  5300
Epoch 1/1... Discriminator Loss: 1.2790... Generator Loss: 0.8282      batch No.:  5400
Epoch 1/1... Discriminator Loss: 1.1574... Generator Loss: 0.6482      batch No.:  5500
Epoch 1/1... Discriminator Loss: 0.8618... Generator Loss: 1.5627      batch No.:  5600
Epoch 1/1... Discriminator Loss: 1.5609... Generator Loss: 0.6274      batch No.:  5700
Epoch 1/1... Discriminator Loss: 1.5638... Generator Loss: 1.0592      batch No.:  5800
Epoch 1/1... Discriminator Loss: 1.2624... Generator Loss: 0.7261      batch No.:  5900
Epoch 1/1... Discriminator Loss: 1.0395... Generator Loss: 0.8073      batch No.:  6000
Epoch 1/1... Discriminator Loss: 0.7348... Generator Loss: 2.1889      batch No.:  6100
Epoch 1/1... Discriminator Loss: 0.9523... Generator Loss: 1.1559      batch No.:  6200
Epoch 1/1... Discriminator Loss: 0.7212... Generator Loss: 1.3321      batch No.:  6300

Submitting This Project

When submitting this project, make sure to run all the cells before saving the notebook. Save the notebook file as "dlnd_face_generation.ipynb" and save it as an HTML file under "File" -> "Download as". Include the "helper.py" and "problem_unittests.py" files in your submission.